    Theories of Meaning for the Internet of Things

    In this chapter, we consider the theoretical foundations for representing knowledge in the Internet of Things context. Specifically, we consider (1) the model-theoretic semantics (i.e., extensional semantics), (2) the possible-world semantics (i.e., intensional semantics), (3) the situation semantics, and (4) the cognitive/distributional semantics. Given the peculiarities of the Internet of Things, we pay particular attention to (a) perception (i.e., how to establish a connection to the world), (b) intersubjectivity (i.e., how to align world representations), and (c) the dynamics of world knowledge (i.e., how to model events). We come to the conclusion that each of the semantic theories helps in modeling specific aspects, but does not sufficiently address all three aspects simultaneously.
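
    The contrast between the first two theories can be made concrete in a few lines of code. Below is a minimal, illustrative Python sketch (not from the chapter; the room and world names are invented): an extensional interpretation fixes a single set of instances, while an intensional interpretation maps each possible world to its own extension.

        # Extensional (model-theoretic) semantics: the meaning of "occupied"
        # is the set of rooms it applies to in one fixed model.
        extension_occupied = {"room_1", "room_3"}

        def occupied_ext(room: str) -> bool:
            return room in extension_occupied

        # Intensional (possible-world) semantics: the meaning is a function
        # from possible worlds (here, hypothetical states) to extensions.
        worlds = {
            "w_day": {"room_1", "room_2"},
            "w_night": set(),
        }

        def occupied_int(world: str, room: str) -> bool:
            return room in worlds[world]

        print(occupied_ext("room_1"))             # True in the fixed model
        print(occupied_int("w_night", "room_1"))  # False in the night world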

    Applying the Web of Things Abstraction to Bluetooth Low Energy Communication

    We apply the Web of Things (WoT) communication pattern, i.e., the semantic description of metadata and interaction affordances, to Internet of Things (IoT) devices that rely on non-IP-based protocols, using Bluetooth Low Energy (LE) as an example. The reference implementation of the WoT Scripting API, node-wot, currently supports only IP-based application layer protocols such as HTTP and MQTT. However, a significant number of IoT devices do not communicate over IP, but via other network layer protocols, e.g., L2CAP as used by Bluetooth LE. To leverage the WoT abstraction in Bluetooth LE communication, we specified two ontologies to describe the capabilities of Bluetooth LE devices and the transmitted binary data, considered the different ways of interacting with the Linux Bluetooth stack BlueZ, and, owing to its better documentation, used the D-Bus API to implement Bluetooth LE bindings in JavaScript. Finally, we evaluated the latencies of the bindings in comparison to the BlueZ tool bluetoothctl, showing that the Bluetooth LE bindings are on average about 16 percent slower than the comparison program during connection establishment and about 6 percent slower when disconnecting, but have almost the same performance during reading (about 3 percent slower).
    Comment: Accepted at the Connected World Semantic Interoperability Workshop 2022, 8 pages.
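
    To illustrate the kind of device interaction these bindings abstract over, here is a minimal sketch of reading a GATT characteristic from a Bluetooth LE device. Note that it uses the Python library bleak rather than the paper's JavaScript/D-Bus bindings, and that the device address and characteristic UUID are placeholders.

        import asyncio
        from bleak import BleakClient

        DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder MAC address
        # Placeholder characteristic UUID (e.g., the standard temperature
        # characteristic 0x2A6E in its 128-bit form).
        CHARACTERISTIC = "00002a6e-0000-1000-8000-00805f9b34fb"

        async def read_property() -> None:
            # Connection establishment is where the paper measured the
            # largest overhead (~16 percent) relative to bluetoothctl.
            async with BleakClient(DEVICE_ADDRESS) as client:
                raw = await client.read_gatt_char(CHARACTERISTIC)
                # The payload is binary; a WoT binding would decode it
                # according to the ontology-described data schema.
                print(raw.hex())

        asyncio.run(read_property())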

    Challenges in Modeling Geospatial Provenance

    The surge in availability of geospatial data sources, the increased use of crowdsourced maps, and the advent of geospatial mashups have brought us to an era where geospatial information is delivered to users after integration from diverse sources. Understanding the provenance of geospatial data is crucial for assessing the quality of the data and for deciding whether or not to trust the information. In this paper we describe user requirements for modeling geospatial provenance.
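
    As a concrete illustration of what such a model might record, here is a minimal sketch using the W3C PROV-O vocabulary with rdflib; the paper discusses requirements rather than a concrete encoding, and all dataset and agent names below are invented.

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF

        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("http://example.org/")  # hypothetical namespace

        g = Graph()
        g.bind("prov", PROV)

        merged = EX["merged_map"]          # the integrated geospatial dataset
        osm = EX["osm_extract"]            # one crowdsourced source
        conflation = EX["conflation_run"]  # the integration activity

        g.add((merged, RDF.type, PROV.Entity))
        g.add((merged, PROV.wasDerivedFrom, osm))
        g.add((merged, PROV.wasGeneratedBy, conflation))
        g.add((conflation, RDF.type, PROV.Activity))
        g.add((conflation, PROV.wasAssociatedWith, EX["volunteer_42"]))

        print(g.serialize(format="turtle"))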

    BOLD: A Benchmark for Linked Data User Agents and a Simulation Framework for Dynamic Linked Data Environments

    The paper presents the BOLD (Buildings on Linked Data) benchmark for Linked Data agents, together with the simulation framework for dynamic Linked Data environments with which we built BOLD. The BOLD benchmark instantiates the framework by providing a read-write Linked Data interface to a smart building with simulated time, occupant movement, and sensors and actuators for lighting. On the Linked Data representation of this environment, agents carry out several specified tasks, such as controlling illumination. The simulation environment provides the means to check for the correct execution of the tasks and to measure the performance of agents. We conduct measurements on Linked Data agents based on condition-action rules.
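
    A task such as illumination control can be handled by a simple polling agent following the condition-action pattern. The following Python sketch uses invented URIs and a placeholder vocabulary, not the actual BOLD interface.

        import time

        import requests
        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/building#")  # placeholder vocabulary
        ROOM = URIRef("http://localhost:8080/room1")    # placeholder resource

        def read_graph(uri: str) -> Graph:
            resp = requests.get(uri, headers={"Accept": "text/turtle"})
            return Graph().parse(data=resp.text, format="turtle")

        while True:
            g = read_graph(str(ROOM))
            # Condition: the room is occupied but its light is off.
            if (ROOM, EX.occupied, Literal(True)) in g and \
               (ROOM, EX.lightOn, Literal(False)) in g:
                # Action: rewrite the resource state with the light on.
                g.set((ROOM, EX.lightOn, Literal(True)))
                requests.put(str(ROOM), data=g.serialize(format="turtle"),
                             headers={"Content-Type": "text/turtle"})
            time.sleep(1)  # poll the simulated environment once per second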

    Rule-based Programming of User Agents for Linked Data

    While current Semantic Web languages and technologies are well-suited for accessing and integrating static data, methods and technologies for the handling of dynamic aspects – required in many modern web environments – are largely missing. We propose to use Abstract State Machines (ASMs) as the formal basis for dealing with changes in Linked Data, which is the combination of the Resource Description Framework (RDF) with the Hypertext Transfer Protocol (HTTP). We provide a synthesis of ASMs and Linked Data and show how the combination aligns with the relevant specifications such as the Request/Response communication in HTTP, the guidelines for updating resource state in the Linked Data Platform (LDP) specification, and the formal grounding of RDF in model theory. Based on the formalisation of Linked Data resources that change state over time, we present the syntax and operational semantics of a small rule-based language to specify user agents that use HTTP to interact with Linked Data as the interface to the environment. We show the feasibility of the approach in an evaluation involving the specification of automation in a Smart Building scenario, where the presented approach serves as a theoretical foundation.
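
    The essence of the ASM execution model behind such agents can be sketched in a few lines: in each step, all rule guards are evaluated against the current state, the resulting updates are collected into an update set, and only then applied, so rules never observe each other's effects within a step. The following illustrative Python sketch is not the paper's language, and a full ASM would additionally check the update set for conflicts.

        State = dict  # location -> value, standing in for HTTP resource state

        def asm_step(state: State, rules) -> State:
            updates = {}
            for guard, effect in rules:
                if guard(state):
                    updates.update(effect(state))  # collect, do not apply yet
            new_state = dict(state)
            new_state.update(updates)  # apply the whole update set at once
            return new_state

        # Two toy rules for a light controlled by an occupancy reading.
        rules = [
            (lambda s: s["occupied"] and not s["light"],
             lambda s: {"light": True}),
            (lambda s: not s["occupied"] and s["light"],
             lambda s: {"light": False}),
        ]

        state = {"occupied": True, "light": False}
        print(asm_step(state, rules))  # {'occupied': True, 'light': True}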

    How Many and What Types of SPARQL Queries can be Answered through Zero-Knowledge Link Traversal?

    The current de facto way to query the Web of Data is through the SPARQL protocol, where a client sends queries to a server through a SPARQL endpoint. In contrast to a plain HTTP server, providing and maintaining a robust and reliable endpoint requires a significant effort that not all publishers are willing or able to make. An alternative query evaluation method is link traversal, where a query is answered by dereferencing online web resources (URIs) in real time. While several approaches for such a lookup-based query evaluation method have been proposed, there exists no analysis of the types (patterns) of queries that can be directly answered on the live Web, without accessing local or remote endpoints and without a priori knowledge of available data sources. In this paper, we first provide a method for checking whether a SPARQL query (to be evaluated on a SPARQL endpoint) can be answered through zero-knowledge link traversal (without accessing the endpoint), and analyse a large corpus of real SPARQL query logs to find the frequency and distribution of answerable and non-answerable query patterns. Subsequently, we provide an algorithm for transforming answerable queries to SPARQL-LD queries that bypass the endpoints. We report experimental results on the efficiency of the transformed queries and discuss the benefits and the limitations of this query evaluation method.
    Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP Symposium On Applied Computing (SAC 2019).
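
    To give the flavour of such a check, the toy Python sketch below approximates answerability for a basic graph pattern: every triple pattern must contain an IRI in the subject or object position that could be dereferenced to obtain matching triples. This is a simplification, not the paper's method, which also has to account for joins between patterns and for further SPARQL features.

        def is_var(term: str) -> bool:
            return term.startswith("?")

        def pattern_answerable(s: str, p: str, o: str) -> bool:
            # Predicate IRIs (e.g., rdf:type) rarely dereference to the
            # triples using them, so only subject and object IRIs count.
            return not is_var(s) or not is_var(o)

        def bgp_answerable(bgp) -> bool:
            return all(pattern_answerable(*tp) for tp in bgp)

        bgp = [
            ("<http://dbpedia.org/resource/Crete>", "?p", "?o"),  # answerable
            ("?s", "rdf:type", "?o"),  # no IRI to follow: not answerable
        ]
        print(bgp_answerable(bgp))  # False, due to the second pattern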

    Open City Data Pipeline

    Statistical data about cities, regions, and countries is collected for various purposes and by various institutions. Yet, while access to high-quality and recent such data is crucial both for decision makers and for the public, all too often such collections of data remain isolated and not re-usable, let alone properly integrated. In this paper we present the Open City Data Pipeline, a focused attempt to collect, integrate, and enrich statistical data collected at city level worldwide, and to republish this data in a reusable manner as Linked Data. The main features of the Open City Data Pipeline are: (i) we integrate and cleanse data from several sources in a modular, extensible, and always up-to-date fashion; (ii) we use both machine learning techniques and ontological reasoning over equational background knowledge to enrich the data by imputing missing values; (iii) we assess the estimated accuracy of such imputations per indicator. Additionally, (iv) we make the integrated and enriched data available both in a web browser interface and as machine-readable Linked Data, using standard vocabularies such as QB and PROV, and linking to, e.g., DBpedia. Lastly, in an exhaustive evaluation of our approach, we compare our enrichment and cleansing techniques to a preliminary version of the Open City Data Pipeline presented at ISWC 2015: firstly, we demonstrate that the combination of equational knowledge and standard machine learning techniques significantly helps to improve the quality of our missing value imputations; secondly, we arguably show that the more data we integrate, the more reliable our predictions become. Hence, over time, the Open City Data Pipeline shall provide a sustainable effort to serve Linked Data about cities in increasing quality.
    Series: Working Papers on Information Systems, Information Business and Operation
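
    As a rough illustration of the imputation step, the sketch below applies a standard scikit-learn imputer to invented city indicators. The actual pipeline additionally exploits equational background knowledge (e.g., indicators that are definable from one another) and estimates accuracy per indicator, which this toy example omits.

        import numpy as np
        from sklearn.impute import KNNImputer

        # Rows: cities; columns: population, area_km2, co2_tonnes
        # (all values invented for illustration).
        X = np.array([
            [1_800_000, 415.0, 9_500_000],
            [620_000, 188.0, np.nan],      # missing CO2 figure to impute
            [3_600_000, 892.0, 21_000_000],
            [410_000, np.nan, 2_100_000],  # missing area to impute
        ])

        # Impute each missing value from the two most similar cities.
        imputer = KNNImputer(n_neighbors=2)
        print(np.round(imputer.fit_transform(X), 1))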